The lecture is a good effort to synthesize what is known about computations in V1 and to propose a new model. It starts by comparing the architecture of the retina with that of V1, explaining the linear equation (linear addition over space and time) for ganglion cells, and introducing the nonlinearity ("the tweak"). V1 does not have a bottleneck like the retina, so it does not need redundancy reduction. A new stimulus set is then proposed, based not on minimizing spread in space and frequency (i.e. Gabor functions) but on 2D-Hermite functions (which allow the gain control to be factorized). The traditional view, based on that minimization and on stimuli that try to target specific features of the model, is replaced by one based on "confinement", although the reasons for this change are not completely clear.
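To make the linear-nonlinear description concrete, here is a minimal Python sketch of that scheme: linear summation over space and time followed by a rectifying nonlinearity ("the tweak"). The array shapes, the threshold parameter, and the choice of half-wave rectification are my own illustrative assumptions, not details taken from the lecture.

```python
import numpy as np

def ln_response(stimulus, space_time_filter, threshold=0.0):
    """Linear-nonlinear sketch: linear summation over space and time,
    followed by half-wave rectification (the "tweak")."""
    lags = space_time_filter.shape[0]
    n_t = stimulus.shape[0]
    linear = np.zeros(n_t)
    for t in range(lags - 1, n_t):
        # Linear stage: weighted sum over the last `lags` frames and all pixels
        window = stimulus[t - lags + 1 : t + 1]      # shape (lags, n_pixels)
        linear[t] = np.sum(window * space_time_filter)
    # Nonlinear stage: rectify above a threshold
    return np.maximum(linear - threshold, 0.0)

# Example with white-noise frames and a random filter (shapes are assumptions)
rng = np.random.default_rng(0)
stim = rng.standard_normal((200, 64))    # 200 frames, 64 pixels
filt = rng.standard_normal((5, 64))      # 5 time lags
rates = ln_response(stim, filt)
```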
Two basic sets are explained: Cartesian (rectangular grid) and polar (annular or circular patterns). An additive model (linear + rectification) is used for the neuronal operation. The model is tested against the following prediction: the filters recovered with each set should be similar. This prediction fails in some cells but holds in others. The conclusion is that V1 cannot be modeled properly with cascade models; the degree of long-range feedback in V1 suggests that recurrent nonlinear processing has to be modeled more explicitly.
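The two stimulus sets can be illustrated with a short sketch that builds unnormalized 2D-Hermite patterns in their Cartesian form (products of 1D Hermite functions) and their polar, Laguerre-Gaussian form (radial profile times an angular harmonic). The function names, the scale parameter sigma, and the omission of normalization constants are assumptions for illustration, not the lecture's exact conventions.

```python
import numpy as np
from scipy.special import eval_hermite, eval_genlaguerre

def cartesian_hermite(j, k, x, y, sigma=1.0):
    """Cartesian 2D-Hermite pattern of rank j + k (unnormalized):
    product of 1D Hermite functions along x and y."""
    u, v = x / sigma, y / sigma
    return eval_hermite(j, u) * eval_hermite(k, v) * np.exp(-(u**2 + v**2) / 2)

def polar_hermite(k, m, x, y, sigma=1.0, phase="cos"):
    """Polar 2D-Hermite pattern of rank 2k + |m| (unnormalized
    Laguerre-Gaussian form): radial Laguerre profile times an angular term."""
    r2 = (x**2 + y**2) / sigma**2
    theta = np.arctan2(y, x)
    radial = r2 ** (abs(m) / 2) * eval_genlaguerre(k, abs(m), r2) * np.exp(-r2 / 2)
    return radial * (np.cos(m * theta) if phase == "cos" else np.sin(m * theta))

# Evaluate a few low-rank patterns on a grid
xs = np.linspace(-3, 3, 64)
X, Y = np.meshgrid(xs, xs)
cart = cartesian_hermite(1, 0, X, Y)   # rank-1 Cartesian pattern
pol = polar_hermite(0, 1, X, Y)        # rank-1 polar (annular/angular) pattern
```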
In the last part, random binary stimuli are suggested for mapping V1 neurons with correlated inputs.
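As a rough illustration of such a stimulus, the following hypothetical scheme generates binary frames whose pixels are correlated by mixing a shared and a private Gaussian source before thresholding; the mixing rule and the correlation value are my assumptions, and the binary correlation after thresholding is only approximately the Gaussian one.

```python
import numpy as np

def correlated_binary_frames(n_frames, n_pixels, rho=0.3, seed=0):
    """Binary (+1/-1) stimulus frames with correlated pixels: each pixel
    thresholds a mix of a shared and a private Gaussian source. The Gaussian
    correlation is rho; the binary correlation is (2/pi)*arcsin(rho)."""
    rng = np.random.default_rng(seed)
    shared = rng.standard_normal((n_frames, 1))
    private = rng.standard_normal((n_frames, n_pixels))
    g = np.sqrt(rho) * shared + np.sqrt(1.0 - rho) * private
    return np.where(g >= 0, 1, -1)

frames = correlated_binary_frames(10_000, 16 * 16, rho=0.3)  # (10000, 256)
```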
The lecture is clear, dynamic, and interesting, and the audience is invited to participate actively. Unfortunately, the participants' voices are not audible. The conclusion is reasonable. The need for this approach is, however, questioned very briefly in one of the lecturer's own slides: "Can we really distinguish a recurrent network from multiple parallel cascades?" True. See synfire chains, for instance. The particular conditions under which this model works better should then be specified.
Juan F Gomez-M Ph.D.